Poole
Jurassic Coast rockfall captured on video
A visitor has called it a miracle no-one was hurt when a section of cliff collapsed on to a beach on Dorset's Jurassic Coast. Suzanne Sears, from Hemel Hempstead in Hertfordshire, was taking a walk near West Bay when she heard a deep cracking noise coming from the cliffs before the rockfall shortly after 16:00 GMT on Tuesday. The Maritime and Coastguard Agency confirmed a rescue team was sent to a report of a cliff fall at West Bay and no one was found to be in distress.
- Europe > United Kingdom > England > Hertfordshire (0.25)
- North America > United States (0.16)
- North America > Central America (0.15)
- (17 more...)
LGM: Enhancing Large Language Models with Conceptual Meta-Relations and Iterative Retrieval
Lei, Wenchang, Zou, Ping, Wang, Yue, Sun, Feng, Zhao, Lei
Large language models (LLMs) exhibit strong semantic understanding, yet struggle when user instructions involve ambiguous or conceptually misaligned terms. We propose the Language Graph Model (LGM) to enhance conceptual clarity by extracting meta-relations (inheritance, alias, and composition) from natural language. The model further employs a reflection mechanism to validate these meta-relations. Leveraging a Concept Iterative Retrieval Algorithm, these relations and their associated descriptions are dynamically supplied to the LLM, improving its ability to interpret concepts and generate accurate responses. Unlike conventional Retrieval-Augmented Generation (RAG) approaches that rely on extended context windows, our method enables large language models to process texts of any length without truncation. Experiments on standard benchmarks demonstrate that the LGM consistently outperforms existing RAG baselines.
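The retrieval loop the abstract describes can be pictured as a breadth-first walk over a typed concept graph. The sketch below is purely illustrative: the concept store, descriptions, relation instances, and hop limit are all assumptions, not the authors' implementation.

```python
from collections import deque

# Hypothetical concept store standing in for LGM's extracted meta-relations;
# the concepts, descriptions, and relation types below are invented.
CONCEPTS = {
    "invoice": {"desc": "A billing document issued to a customer.",
                "relations": [("alias", "bill"), ("composition", "line item")]},
    "bill": {"desc": "Informal term for an invoice.", "relations": []},
    "line item": {"desc": "A single charged entry on an invoice.",
                  "relations": [("inheritance", "record")]},
    "record": {"desc": "A generic stored data entry.", "relations": []},
}

def iterative_retrieve(seed, max_hops=2):
    """Breadth-first expansion over meta-relations, collecting the
    descriptions that would be supplied to the LLM as context."""
    seen, queue, context = {seed}, deque([(seed, 0)]), []
    while queue:
        name, hops = queue.popleft()
        context.append(f"{name}: {CONCEPTS[name]['desc']}")
        if hops == max_hops:
            continue
        for _rel, target in CONCEPTS[name]["relations"]:
            if target not in seen:
                seen.add(target)
                queue.append((target, hops + 1))
    return context

print(iterative_retrieve("invoice"))
```

Because descriptions are fetched on demand, the supplied context grows with the query's conceptual neighbourhood rather than with document length, which is the property the abstract contrasts with window-limited RAG.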
- North America > United States (0.14)
- Europe > France (0.04)
- Asia > China > Hunan Province (0.04)
- (5 more...)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Consumer Health (1.00)
- Education > Health & Safety > School Nutrition (1.00)
- (2 more...)
LLM-D12: A Dual-Dimensional Scale of Instrumental and Relational Dependencies on Large Language Models
Yankouskaya, Ala, Babiker, Areej B., Rizvi, Syeda W. F., Alshakhsi, Sameha, Liebherr, Magnus, Ali, Raian
There is growing interest in understanding how people interact with large language models (LLMs) and whether such models elicit dependency or even addictive behaviour. Validated tools to assess the extent to which individuals may become dependent on LLMs are scarce and primarily build on classic behavioral addiction symptoms, adapted to the context of LLM use. We view this as a conceptual limitation, as the LLM-human relationship is more nuanced and warrants a fresh and distinct perspective. To address this gap, we developed and validated a new 12-item questionnaire to measure LLM dependency, referred to as LLM-D12. The scale was based on the authors' prior theoretical work, with items developed accordingly and responses collected from 526 participants in the UK. Exploratory and confirmatory factor analyses, performed on separate halves of the total sample using a split-sample approach, supported a two-factor structure: Instrumental Dependency (six items) and Relationship Dependency (six items). Instrumental Dependency reflects the extent to which individuals rely on LLMs to support or collaborate in decision-making and cognitive tasks. Relationship Dependency captures the tendency to perceive LLMs as socially meaningful, sentient, or companion-like entities. The two-factor structure demonstrated excellent internal consistency and clear discriminant validity. External validation confirmed both the conceptual foundation and the distinction between the two subscales. The psychometric properties and structure of our LLM-D12 scale were interpreted in light of the emerging view that dependency on LLMs does not necessarily indicate dysfunction but may still reflect reliance levels that could become problematic in certain contexts.
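The "excellent internal consistency" reported for each six-item factor is conventionally measured with Cronbach's alpha. A minimal sketch of that statistic, using invented responses rather than the study's data:

```python
# Cronbach's alpha: internal-consistency statistic for a questionnaire
# subscale such as LLM-D12's six-item factors. Responses are illustrative.
def cronbach_alpha(items):
    """items: one list of responses per questionnaire item
    (all lists cover the same participants, in the same order)."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(col) for col in zip(*items)]  # per-participant sum scores
    return (k / (k - 1)) * (1.0 - item_var_sum / var(totals))

# Six perfectly consistent items (each a shifted copy) give alpha = 1.0.
base = [1, 2, 3, 4, 5]
subscale = [[b + i for b in base] for i in range(6)]
print(round(cronbach_alpha(subscale), 3))  # 1.0
```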
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.04)
- Europe > United Kingdom > England > Dorset > Bournemouth (0.04)
- North America > United States > New York (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Education (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.67)
Distributed optimization: designed for federated learning
Guo, Wenyou, Qu, Ting, Pan, Chunrong, Huang, George Q.
Federated Learning (FL), as a distributed collaborative Machine Learning (ML) framework under privacy-preserving constraints, has garnered increasing research attention in cross-organizational data collaboration scenarios. This paper proposes a class of distributed optimization algorithms based on the augmented Lagrangian technique, designed to accommodate diverse communication topologies in both centralized and decentralized FL settings. Furthermore, we develop multiple termination criteria and parameter update mechanisms to enhance computational efficiency, accompanied by rigorous theoretical guarantees of convergence. By generalizing the augmented Lagrangian relaxation through the incorporation of proximal relaxation and quadratic approximation, our framework systematically recovers a broad class of classical unconstrained optimization methods, including the proximal algorithm, classical gradient descent, and stochastic gradient descent, among others. Notably, the convergence properties of these methods can be naturally derived within the proposed theoretical framework. Numerical experiments demonstrate that the proposed algorithm exhibits strong performance in large-scale settings with significant statistical heterogeneity across clients. Such formulations, commonly referred to as consensus optimization problems, find widespread applications in interdisciplinary domains including distributed ML, collaborative sensing in sensor networks, and distributed parameter estimation [1]. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 52375498, and in part by the Fundamental Research Funds for the Central Universities under Grant 21623111.
Ting Qu is with Guangdong International Cooperation Base of Science and Technology for GBA Smart Logistics, Jinan University, Zhuhai 519070, China, also with School of Intelligent Systems Science and Engineering, Jinan University, Zhuhai 519070, China, and also with Institute of Physical Internet, Jinan University, Zhuhai 519070, China (e-mail: quting@jnu.edu.cn).
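As a concrete instance of the consensus formulation the paper builds on, here is a textbook augmented-Lagrangian (ADMM-style) consensus solver on scalar quadratic losses. It is a minimal special case for illustration, not the authors' algorithm, and the client losses are invented:

```python
# Consensus ADMM sketch: each client i holds a private quadratic loss
# f_i(x) = 0.5 * (x - a_i)^2, so the consensus optimum is mean(a).
def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n   # local variables, one per client
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # global consensus variable
    for _ in range(iters):
        # local step: argmin_x f_i(x) + (rho/2) * (x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # global step: average of local estimates plus duals
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual ascent on the consensus constraint x_i = z
        u = [u[i] + x[i] - z for i in range(n)]
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # ≈ 3.0, the mean of the a_i
```

With strongly convex local losses, the global variable z converges to the minimiser of the summed objective; the per-client split is what lets each update use only local data, matching the FL setting.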
- Asia > China > Guangdong Province > Zhuhai (0.64)
- Asia > China > Hong Kong (0.05)
- Asia > China > Guangdong Province > Guangzhou (0.04)
- (11 more...)
Structure Guided Prompt: Instructing Large Language Model in Multi-Step Reasoning by Exploring Graph Structure of the Text
Cheng, Kewei, Ahmed, Nesreen K., Willke, Theodore, Sun, Yizhou
Although Large Language Models (LLMs) excel at straightforward reasoning tasks, they frequently struggle when confronted with more complex multi-step reasoning, due to a range of factors. Firstly, natural language often encompasses complex relationships among entities, making it challenging to maintain a clear reasoning chain over longer spans. Secondly, the abundance of linguistic diversity means that the same entities and relationships can be expressed using different terminologies and structures, complicating the task of identifying and establishing connections between multiple pieces of information. Graphs provide an effective solution to represent data rich in relational information and capture long-term dependencies among entities. To harness the potential of graphs, our paper introduces Structure Guided Prompt, an innovative three-stage task-agnostic prompting framework designed to improve the multi-step reasoning capabilities of LLMs in a zero-shot setting. This framework explicitly converts unstructured text into a graph via LLMs and instructs them to navigate this graph using task-specific strategies to formulate responses. By effectively organizing information and guiding navigation, it enables LLMs to provide more accurate and context-aware responses. Our experiments show that this framework significantly enhances the reasoning capabilities of LLMs, enabling them to excel in a broader spectrum of natural language scenarios.
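The convert-then-navigate idea can be illustrated with hand-written triples standing in for the LLM extraction stage. All entities, relations, and the query below are invented for illustration:

```python
# Triples a text-to-graph extraction step might produce from a passage,
# then navigated hop by hop — the multi-step part of the reasoning.
triples = [
    ("Alice", "manages", "Bob"),
    ("Bob", "manages", "Carol"),
    ("Carol", "works_in", "Paris"),
]

graph = {}
for head, rel, tail in triples:
    graph.setdefault(head, []).append((rel, tail))

def follow(start, relations):
    """Walk the graph along a fixed relation chain (one navigation
    strategy an LLM could be instructed to apply); None if it breaks."""
    node = start
    for rel in relations:
        nxt = [t for r, t in graph.get(node, []) if r == rel]
        if not nxt:
            return None
        node = nxt[0]
    return node

# "Where does the report of Alice's report work?" — a 3-hop chain.
print(follow("Alice", ["manages", "manages", "works_in"]))  # Paris
```

Keeping each hop explicit is what prevents the chain from dissolving over long spans, which is the failure mode the first paragraph of the abstract describes.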
- Europe > United Kingdom > England > Dorset > Poole (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > United Kingdom > England > Dorset > Bournemouth (0.04)
- (7 more...)
- Leisure & Entertainment > Sports > Motorsports > Formula One (1.00)
- Government (1.00)
Dual input stream transformer for eye-tracking line assignment
Mercier, Thomas M., Budka, Marcin, Vasilev, Martin R., Kirkby, Julie A., Angele, Bernhard, Slattery, Timothy J.
We introduce a novel Dual Input Stream Transformer (DIST) for the challenging problem of assigning fixation points from eye-tracking data collected during passage reading to the line of text that the reader was actually focused on. This post-processing step is crucial for analysis of the reading data due to the presence of noise in the form of vertical drift. We evaluate DIST against nine classical approaches on a comprehensive suite of nine diverse datasets, and demonstrate DIST's superiority. By combining multiple instances of the DIST model in an ensemble we achieve an average accuracy of 98.5% across all datasets. Our approach presents a significant step towards addressing the bottleneck of manual line assignment in reading research. Through extensive model analysis and ablation studies, we identify key factors that contribute to DIST's success, including the incorporation of line overlap features and the use of a second input stream. Through evaluation on a set of diverse datasets we demonstrate that DIST is robust to various experimental setups, making it a safe first choice for practitioners in the field.
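For context, a classical baseline of the kind DIST is compared against can be as simple as nearest-line assignment, which vertical drift defeats. A toy sketch with invented coordinates (not any dataset's geometry):

```python
# Nearest-line baseline: each fixation is assigned to the text line whose
# vertical centre is closest. Vertical drift in the recording breaks this
# rule, which is the failure learned approaches aim to handle.
line_centers = [100.0, 140.0, 180.0]   # y-coordinates of three text lines

def assign_lines(fixation_ys):
    return [min(range(len(line_centers)),
                key=lambda i: abs(y - line_centers[i]))
            for y in fixation_ys]

clean = [102.0, 98.0, 141.0, 183.0]
drifted = [102.0, 98.0, 141.0, 158.0]  # drift pulls the line-3 fixation up
print(assign_lines(clean))    # [0, 0, 1, 2] — all correct
print(assign_lines(drifted))  # [0, 0, 1, 1] — last fixation mis-assigned
```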
- Europe > United Kingdom > England > Dorset > Bournemouth (0.05)
- Europe > United Kingdom > England > Dorset > Poole (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (13 more...)
- Education > Educational Setting > Higher Education (0.67)
- Health & Medicine > Therapeutic Area > Neurology (0.47)
It HAS to be Subjective: Human Annotator Simulation via Zero-shot Density Estimation
Wu, Wen, Chen, Wenlin, Zhang, Chao, Woodland, Philip C.
Human annotator simulation (HAS) serves as a cost-effective substitute for human evaluation such as data annotation and system assessment. Human perception and behaviour during human evaluation exhibit inherent variability due to diverse cognitive processes and subjective interpretations, which should be taken into account in modelling to better mimic the way people perceive and interact with the world. This paper introduces a novel meta-learning framework that treats HAS as a zero-shot density estimation problem, which incorporates human variability and allows for the efficient generation of human-like annotations for unlabelled test inputs. Under this framework, we propose two new model classes, conditional integer flows and conditional softmax flows, to account for ordinal and categorical annotations, respectively. The proposed method is evaluated on three real-world human evaluation tasks and shows superior capability and efficiency to predict the aggregated behaviours of human annotators, match the distribution of human annotations, and simulate the inter-annotator disagreements. Collecting human annotations or evaluations often requires substantial resources and may expose human annotators to distressing and harmful content in sensitive tasks (e.g., toxic speech detection, suicidal risk prediction, and depression detection). This inspires the exploration of human annotator simulation (HAS) as a scalable and cost-effective alternative, which facilitates large-scale dataset evaluation, benchmarking, and system comparisons. Variability is a unique aspect of real-world human evaluation, since individual variations in cognitive biases, cultural backgrounds, and personal experiences (Hirschberg et al., 2003; Wiebe et al., 2004; Haselton et al., 2015) can lead to variability in human interpretation (Lotfian & Busso, 2019; Mathew et al., 2021; Maniati et al., 2022).
HAS aims to incorporate the variability present in human evaluation rather than solely relying on majority opinions, which mitigates potential biases and over-representation in scenarios where dominant opinions could potentially overshadow minority viewpoints (Dixon et al., 2018; Hutchinson et al., 2020), thus promoting fairness and inclusivity. In this work, we investigate HAS for the automatic generation of human-like annotations that take into account the variability in human evaluation.
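The core shift from majority labels to distribution-preserving simulation can be sketched without the paper's flow models: draw simulated annotators from a per-item label distribution, so disagreement survives instead of collapsing to one vote. The labels and probabilities below are invented:

```python
import random

# Sketch of the core HAS idea only (not the paper's conditional flows):
# sample annotators from a full label distribution rather than emitting
# the single majority label, preserving inter-annotator disagreement.
def simulate_annotators(label_probs, n_annotators, seed=0):
    rng = random.Random(seed)
    labels = list(label_probs)
    weights = [label_probs[l] for l in labels]
    return rng.choices(labels, weights=weights, k=n_annotators)

probs = {"toxic": 0.3, "ambiguous": 0.2, "benign": 0.5}
draws = simulate_annotators(probs, 2000)
print({l: draws.count(l) / len(draws) for l in probs})  # close to probs
```

A majority-vote pipeline would report only "benign" here; the simulated pool keeps the 30% of annotators who would call the item toxic, which is exactly the minority viewpoint the paragraph above argues should not be overshadowed.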
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.28)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- (27 more...)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
Election watchdog issues urgent warning over AI interference: 'race against the clock'
Tech Policy Center director Kara Fredrick explains how individuals and companies can mitigate the spread of misinformation by A.I. on 'The Faulkner Focus.' British election regulators have urged politicians to pass new laws to limit spending on artificial intelligence (AI) as well as new requirements to identify AI-generated content. "The next U.K. general election is a ripe target for electronic disinformation given we are in the infancy of the AI age," Alan Mendoza, co-founder and executive director of the Henry Jackson Society, told Fox News Digital. "Many of the possible problems that may emerge have not even been considered." "As a result, we face a race against the clock to introduce appropriate protections, or run the nightmare risk of bad actors influencing campaigns and destroying public trust in our democratic process," he added.
- North America > United States (0.86)
- Asia > Middle East > Republic of Türkiye (0.31)
- Europe > United Kingdom > England > Staffordshire > Stoke-on-Trent (0.06)
- (3 more...)
- Media > News (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (0.85)
- Government > Regional Government > Asia Government > Middle East Government > Republic of Türkiye Government (0.31)
Out-of-Distribution Detection and Selective Generation for Conditional Language Models
Ren, Jie, Luo, Jiaming, Zhao, Yao, Krishna, Kundan, Saleh, Mohammad, Lakshminarayanan, Balaji, Liu, Peter J.
Machine learning algorithms typically assume independent and identically distributed samples in training and at test time. Much work has shown that high-performing ML classifiers can degrade significantly and provide overly-confident, wrong classification predictions, particularly for out-of-distribution (OOD) inputs. Conditional language models (CLMs) are predominantly trained to classify the next token in an output sequence, and may suffer even worse degradation on OOD inputs as the prediction is done auto-regressively over many steps. Furthermore, the space of potential low-quality outputs is larger as arbitrary text can be generated and it is important to know when to trust the generated output. We present a highly accurate and lightweight OOD detection method for CLMs, and demonstrate its effectiveness on abstractive summarization and translation. We also show how our method can be used under the common and realistic setting of distribution shift for selective generation (analogous to selective prediction for classification) of high-quality outputs, while automatically abstaining from low-quality ones, enabling safer deployment of generative language models.
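One common lightweight embedding-distance recipe in this spirit (not necessarily the paper's exact score) fits a Gaussian to in-domain representations and abstains when a new input's distance is too large. The embeddings below are toy 2-D vectors, not real model activations:

```python
# Fit a diagonal Gaussian to in-domain embeddings, score new inputs by
# squared Mahalanobis distance, and abstain above a threshold.
def fit_diag_gaussian(embs):
    d, n = len(embs[0]), len(embs)
    mean = [sum(e[j] for e in embs) / n for j in range(d)]
    var = [sum((e[j] - mean[j]) ** 2 for e in embs) / n + 1e-6
           for j in range(d)]  # small floor keeps the score finite
    return mean, var

def ood_score(emb, mean, var):
    # squared Mahalanobis distance under the diagonal Gaussian
    return sum((emb[j] - mean[j]) ** 2 / var[j] for j in range(len(emb)))

train_embs = [[0.9, 0.1], [1.1, -0.1], [1.0, 0.0]]  # toy in-domain points
mean, var = fit_diag_gaussian(train_embs)
score_in = ood_score([1.05, 0.02], mean, var)
score_out = ood_score([4.0, 3.0], mean, var)
print(score_in < score_out)  # True: the OOD input scores far higher
```

Selective generation then reduces to a threshold on this score: generate when the score is low, abstain (or defer to a human) when it is high.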
- North America > United States > Missouri > Jackson County > Kansas City (0.14)
- North America > Canada > Alberta (0.14)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- (16 more...)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.68)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Health & Medicine (0.67)
- Leisure & Entertainment > Sports > Soccer (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Machine Translation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Scaling up Stochastic Gradient Descent for Non-convex Optimisation
Mohamad, Saad, Alamri, Hamad, Bouchachia, Abdelhamid
Stochastic gradient descent (SGD) is a widely adopted iterative method for optimizing differentiable objective functions. In this paper, we propose and discuss a novel approach to scale up SGD in applications involving non-convex functions and large datasets. We address the bottleneck problem arising when using both shared and distributed memory. Typically, the former is bounded by limited computation resources and bandwidth whereas the latter suffers from communication overheads. We propose a unified distributed and parallel implementation of SGD (named DPSGD) that relies on both asynchronous distribution and lock-free parallelism. By combining two strategies into a unified framework, DPSGD is able to strike a better trade-off between local computation and communication. The convergence properties of DPSGD are studied for non-convex problems such as those arising in statistical modelling and machine learning. Our theoretical analysis shows that DPSGD leads to speed-up with respect to the number of cores and number of workers while guaranteeing an asymptotic convergence rate of $O(1/\sqrt{T})$ given that the number of cores is bounded by $T^{1/4}$ and the number of workers is bounded by $T^{1/2}$ where $T$ is the number of iterations. The potential gains that can be achieved by DPSGD are demonstrated empirically on a stochastic variational inference problem (Latent Dirichlet Allocation) and on a deep reinforcement learning (DRL) problem (advantage actor critic - A2C) resulting in two algorithms: DPSVI and HSA2C. Empirical results validate our theoretical findings. Comparative studies are conducted to show the performance of the proposed DPSGD against the state-of-the-art DRL algorithms.
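The lock-free half of such a design can be sketched Hogwild-style: threads update a shared parameter with no locking. The toy convex objective below is an illustration only, and CPython's GIL means this shows the structure rather than a true parallel speed-up:

```python
import threading

# Hogwild-style lock-free parallel SGD — the "lock-free parallelism" half
# of a DPSGD-like scheme (the asynchronous-distribution half would span
# worker machines). Toy problem: minimise sum_a 0.5*(w - a)^2, whose
# optimum is mean(a) = 3.
w = [0.0]                     # shared parameter, updated without a lock
data = [1.0, 2.0, 3.0, 6.0]

def worker(samples, lr=0.01, epochs=200):
    for _ in range(epochs):
        for a in samples:
            g = w[0] - a      # gradient of 0.5*(w - a)^2 at the current w
            w[0] -= lr * g    # racy update; no lock, as in Hogwild

threads = [threading.Thread(target=worker, args=(data,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(w[0])  # ≈ 3.0 (the constant step size leaves a small oscillation)
```

Despite the races, every update contracts toward a sample's target, so the shared parameter settles near the optimum; bounding such staleness effects is what the paper's $O(1/\sqrt{T})$ analysis formalises for the non-convex case.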
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom > England > Dorset > Bournemouth (0.04)
- Europe > United Kingdom > England > West Midlands > Coventry (0.04)
- Europe > United Kingdom > England > Dorset > Poole (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Gradient Descent (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)